Chernoff's Inequality

In probability theory, the Chernoff bound gives exponentially decreasing bounds on tail distributions of sums of independent random variables. Despite being named after Herman Chernoff, the author of the paper it first appeared in, the result is due to Herman Rubin. It is a sharper bound than the first- or second-moment-based tail bounds such as Markov's inequality or Chebyshev's inequality, which only yield power-law bounds on tail decay. However, the Chernoff bound requires the variates to be independent, a condition that is not required by either Markov's inequality or Chebyshev's inequality (although Chebyshev's inequality does require the variates to be pairwise independent). The Chernoff bound is related to the Bernstein inequalities, which were developed earlier, and to Hoeffding's inequality.


The generic bound

The generic Chernoff bound for a random variable X is attained by applying Markov's inequality to e^{tX}. This gives a bound in terms of the moment-generating function of X. For every t > 0:

:\Pr(X \geq a) = \Pr(e^{tX} \geq e^{ta}) \leq \frac{\operatorname E\left[e^{tX}\right]}{e^{ta}}.

Since this bound holds for every positive t, we have:

:\Pr(X \geq a) \leq \inf_{t > 0} \frac{\operatorname E\left[e^{tX}\right]}{e^{ta}}.

The Chernoff bound sometimes refers to the above inequality, which was first applied by Sergei Bernstein to prove the related Bernstein inequalities. It is also used to prove Hoeffding's inequality, Bennett's inequality, and McDiarmid's inequality. This inequality can be applied generally to various classes of distributions, including sub-Gaussian distributions, sub-gamma distributions, and sums of independent random variables. Chernoff bounds commonly refer to the case where X is the sum of independent Bernoulli random variables.

When X is the sum of n independent random variables X_1, \dots, X_n, the moment generating function of X is the product of the individual moment generating functions, giving that

:\Pr(X \geq a) \leq \inf_{t > 0} e^{-ta} \prod_i \operatorname E\left[e^{t X_i}\right]. \qquad (1)

By performing the same analysis on the random variable -X, one can get the same bound in the other direction:

:\Pr(X \leq a) \leq \inf_{t < 0} e^{-ta} \prod_i \operatorname E\left[e^{t X_i}\right].

Specific Chernoff bounds are attained by calculating the moment-generating function \operatorname E\left[e^{t X_i}\right] for specific instances of the random variables X_i. The bounds in the following sections for Bernoulli random variables are derived by using that, for a Bernoulli random variable X_i with probability ''p'' of being equal to 1,

:\operatorname E\left[e^{t X_i}\right] = (1 - p) e^0 + p e^t = 1 + p (e^t - 1) \leq e^{p(e^t - 1)}.

One can encounter many flavors of Chernoff bounds: the original ''additive form'' (which gives a bound on the absolute error) or the more practical ''multiplicative form'' (which bounds the error relative to the mean).
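To make the generic bound (1) concrete, the following Python sketch numerically minimizes its right-hand side over t > 0 for a sum of independent Bernoulli variables. The helper name chernoff_upper_tail and the use of scipy.optimize.minimize_scalar are illustrative choices, not part of any standard Chernoff-bound library.

# Minimal sketch of the generic Chernoff bound (1) for a sum of independent
# Bernoulli(p_i) variables, minimizing over t > 0 numerically.
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_upper_tail(ps, a):
    """Bound Pr(X >= a) for X = sum of independent Bernoulli(p_i)."""
    ps = np.asarray(ps, dtype=float)

    def log_bound(t):
        # log of e^{-ta} * prod_i E[e^{t X_i}], with E[e^{t X_i}] = 1 + p_i (e^t - 1)
        return -t * a + np.sum(np.log1p(ps * (np.exp(t) - 1.0)))

    res = minimize_scalar(log_bound, bounds=(1e-9, 50.0), method="bounded")
    return min(1.0, np.exp(res.fun))

# Example: 100 fair coins, probability of at least 70 heads.
print(chernoff_upper_tail([0.5] * 100, 70))   # roughly 3e-4 (a bound, not the exact value)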


Multiplicative form (relative error)

Multiplicative Chernoff bound. Suppose X_1, \dots, X_n are independent random variables taking values in \{0, 1\}. Let X denote their sum and let \mu = \operatorname E[X] denote the sum's expected value. Then for any \delta > 0,

:\Pr ( X > (1+\delta)\mu) < \left(\frac{e^\delta}{(1+\delta)^{1+\delta}}\right)^\mu.

A similar proof strategy can be used to show that, for 0 < \delta < 1,

:\Pr(X < (1-\delta)\mu) < \left(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right)^\mu.

The above formula is often unwieldy in practice, so the following looser but more convenient bounds are often used, which follow from the inequality \textstyle\frac{2\delta}{2+\delta} \le \log(1+\delta) from the list of logarithmic inequalities:

:\Pr( X \le (1-\delta)\mu) \le e^{-\delta^2\mu/2}, \qquad 0 \le \delta,
:\Pr( X \ge (1+\delta)\mu) \le e^{-\delta^2\mu/(2+\delta)}, \qquad 0 \le \delta,
:\Pr( |X - \mu| \ge \delta\mu) \le 2e^{-\delta^2\mu/3}, \qquad 0 \le \delta \le 1.

Notice that the bounds are trivial for \delta = 0.
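To see how conservative these simplified bounds are in practice, the following Python sketch evaluates them for a binomial sum and compares the upper-tail bound with a Monte Carlo estimate; the parameter values are arbitrary assumptions.

# Minimal sketch: compare the simplified multiplicative Chernoff bounds with a
# Monte Carlo estimate of the tail of X ~ Binomial(n, p).
import numpy as np

rng = np.random.default_rng(0)
n, p, delta = 1000, 0.1, 0.3
mu = n * p

samples = rng.binomial(n, p, size=200_000)
empirical_upper = np.mean(samples >= (1 + delta) * mu)

bound_upper = np.exp(-delta**2 * mu / (2 + delta))   # Pr(X >= (1+delta) mu)
bound_lower = np.exp(-delta**2 * mu / 2)             # Pr(X <= (1-delta) mu)
bound_two_sided = 2 * np.exp(-delta**2 * mu / 3)     # Pr(|X - mu| >= delta mu)

print(empirical_upper, bound_upper, bound_lower, bound_two_sided)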


Additive form (absolute error)

The following theorem is due to Wassily Hoeffding and hence is called the Chernoff–Hoeffding theorem.

:Chernoff–Hoeffding theorem. Suppose X_1, \dots, X_n are i.i.d. random variables, taking values in \{0, 1\}. Let p = \operatorname E[X_1] and \varepsilon > 0.

::\begin{align}
\Pr \left (\frac{1}{n} \sum X_i \geq p + \varepsilon \right ) \leq \left (\left (\frac{p}{p + \varepsilon}\right )^{p+\varepsilon} \left (\frac{1 - p}{1 - p - \varepsilon}\right )^{1 - p - \varepsilon}\right )^n &= e^{-n D(p+\varepsilon\parallel p)} \\
\Pr \left (\frac{1}{n} \sum X_i \leq p - \varepsilon \right ) \leq \left (\left (\frac{p}{p - \varepsilon}\right )^{p-\varepsilon} \left (\frac{1 - p}{1 - p + \varepsilon}\right )^{1 - p + \varepsilon}\right )^n &= e^{-n D(p-\varepsilon\parallel p)}
\end{align}

:where

:: D(x\parallel y) = x \ln \frac{x}{y} + (1-x) \ln \left (\frac{1-x}{1-y} \right )

:is the Kullback–Leibler divergence between Bernoulli distributed random variables with parameters ''x'' and ''y'' respectively. If p \geq \tfrac{1}{2}, then D(p+\varepsilon\parallel p)\ge \tfrac{\varepsilon^2}{2p(1-p)}, which means

:: \Pr\left ( \frac{1}{n}\sum X_i > p+\varepsilon \right ) \leq \exp \left (-\frac{\varepsilon^2 n}{2p(1-p)} \right ).

A simpler bound follows by relaxing the theorem using D(p + \varepsilon \parallel p) \geq 2\varepsilon^2, which follows from the convexity of D(p+\varepsilon\parallel p) and the fact that

:\frac{d^2}{d\varepsilon^2} D(p+\varepsilon\parallel p) = \frac{1}{(p+\varepsilon)(1-p-\varepsilon)} \geq 4 = \frac{d^2}{d\varepsilon^2}(2\varepsilon^2).

This result is a special case of Hoeffding's inequality. Sometimes, the bounds

: \begin{align}
D( (1+x) p \parallel p) \geq \tfrac{1}{4} x^2 p, & & & -\tfrac{1}{2} \leq x \leq \tfrac{1}{2},\\
D(x \parallel y) \geq \frac{3(x-y)^2}{2(x+2y)}, \\
D(x \parallel y) \geq \frac{(x-y)^2}{2y}, & & & x \leq y,\\
D(x \parallel y) \geq \frac{(x-y)^2}{2(x+y)}, & & & x \geq y
\end{align}

which are stronger for small p, are also used.
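The following Python sketch compares the bound e^{-n D(p+\varepsilon\parallel p)} with the exact binomial tail computed by scipy; the parameter values are illustrative.

# Minimal sketch: the additive Chernoff-Hoeffding bound e^{-n D(p+eps || p)}
# versus the exact binomial tail Pr( (1/n) sum X_i >= p + eps ).
import math
from scipy.stats import binom

def kl_bernoulli(x, y):
    """Kullback-Leibler divergence D(x || y) between Bernoulli(x) and Bernoulli(y)."""
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

n, p, eps = 500, 0.3, 0.05
bound = math.exp(-n * kl_bernoulli(p + eps, p))
exact = binom.sf(math.ceil(n * (p + eps)) - 1, n, p)   # Pr(X >= n(p+eps))
print(bound, exact)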


Sums of independent bounded random variables

Chernoff bounds may also be applied to general sums of independent, bounded random variables, regardless of their distribution; this is known as Hoeffding's inequality. The proof follows a similar approach to the other Chernoff bounds, but applying Hoeffding's lemma to bound the moment generating functions (see Hoeffding's inequality).

: Hoeffding's inequality. Suppose X_1, \dots, X_n are independent random variables taking values in [a, b]. Let X denote their sum and let \mu = \operatorname E[X] denote the sum's expected value. Then for any \delta > 0,

::\Pr (X \le (1-\delta)\mu) < e^{-\frac{2\delta^2\mu^2}{n(b-a)^2}},

::\Pr (X \ge (1+\delta)\mu) < e^{-\frac{2\delta^2\mu^2}{n(b-a)^2}}.
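A small simulation sketch, under the arbitrary assumption of uniform variables on [a, b], shows the bound alongside an empirical tail estimate; the bound is valid here but far from tight.

# Minimal sketch: Hoeffding-style bound for a sum of independent variables
# bounded in [a, b], checked against simulation. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, a, b, delta = 500, 0.0, 1.0, 0.1

mu = n * (a + b) / 2                                   # E[X] for uniform [a, b] variables
bound = np.exp(-2 * delta**2 * mu**2 / (n * (b - a) ** 2))

sums = rng.uniform(a, b, size=(20_000, n)).sum(axis=1)
empirical = np.mean(sums >= (1 + delta) * mu)
print(bound, empirical)                                # empirical tail is much smaller than the bound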


Applications

Chernoff bounds have very useful applications in set balancing and packet routing in sparse networks. The set balancing problem arises while designing statistical experiments. Typically, while designing a statistical experiment, given the features of each participant in the experiment, we need to know how to divide the participants into two disjoint groups such that each feature is roughly as balanced as possible between the two groups; a randomized sketch of this idea is given below.
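The following Python sketch illustrates the standard randomized approach: assign each subject to one of the two groups by a fair coin flip; a Chernoff/Hoeffding argument then bounds every feature's imbalance by about \sqrt{4n\ln m} with high probability. The random feature matrix and the threshold below are illustrative assumptions, not a prescribed algorithm.

# Minimal sketch of randomized set balancing: assign each of n subjects to one of
# two groups by a fair coin flip; a Chernoff/Hoeffding argument shows every one of
# the m features is balanced to within about sqrt(4 n ln m) with high probability.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, m_features = 1000, 50

# A[i, j] = 1 if subject j has feature i, else 0 (random data for illustration).
A = rng.integers(0, 2, size=(m_features, n_subjects))

signs = rng.choice([-1, 1], size=n_subjects)       # +1 -> group 1, -1 -> group 2
imbalance = np.abs(A @ signs)                      # per-feature imbalance |A b|

threshold = np.sqrt(4 * n_subjects * np.log(m_features))
print(imbalance.max(), threshold)                  # max imbalance vs. the guarantee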
Chernoff bounds are also used to obtain tight bounds for permutation routing problems, which reduce network congestion while routing packets in sparse networks.

Chernoff bounds are used in computational learning theory to prove that a learning algorithm is probably approximately correct, i.e. with high probability the algorithm has small error on a sufficiently large training data set.

Chernoff bounds can be effectively used to evaluate the "robustness level" of an application or algorithm by exploring its perturbation space with randomization. The use of the Chernoff bound permits one to abandon the strong, and mostly unrealistic, assumption that the perturbation magnitude is small. The robustness level can, in turn, be used either to validate or reject a specific algorithmic choice, a hardware implementation, or the appropriateness of a solution whose structural parameters are affected by uncertainties.

A simple and common use of Chernoff bounds is for "boosting" of randomized algorithms. If one has an algorithm that outputs a guess that is the desired answer with probability ''p'' > 1/2, then one can get a higher success rate by running the algorithm n = \log(1/\delta) 2p/(p - 1/2)^2 times and outputting a guess that is output by more than ''n''/2 runs of the algorithm. (There cannot be more than one such guess by the pigeonhole principle.) Assuming that these algorithm runs are independent, the probability that more than ''n''/2 of the guesses are correct is equal to the probability that the sum of independent Bernoulli random variables that are 1 with probability ''p'' is more than ''n''/2. This can be shown to be at least 1-\delta via the multiplicative Chernoff bound (Corollary 13.3 in Sinclair's class notes):

:\Pr\left[X > \frac{n}{2}\right] \ge 1 - e^{-n\left(p - \frac{1}{2}\right)^2/(2p)} \geq 1-\delta.
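A minimal simulation sketch of this boosting scheme, with illustrative values of ''p'' and \delta:

# Minimal sketch of boosting a randomized algorithm by majority vote: each run is
# correct independently with probability p > 1/2; repeating
# n = 2p*ln(1/delta)/(p - 1/2)^2 times and taking the majority answer succeeds
# with probability at least 1 - delta.
import math
import numpy as np

rng = np.random.default_rng(3)
p, delta = 0.6, 0.01

n = math.ceil(math.log(1 / delta) * 2 * p / (p - 0.5) ** 2)

trials = 100_000
correct_runs = rng.binomial(n, p, size=trials)          # correct guesses per boosted run
boosted_success = np.mean(correct_runs > n / 2)

print(n, boosted_success, ">=", 1 - delta)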


Matrix Chernoff bound

Rudolf Ahlswede and Andreas Winter introduced a Chernoff bound for matrix-valued random variables. The following version of the inequality can be found in the work of Tropp.

Let M_1, \dots, M_t be independent matrix-valued random variables such that M_i \in \mathbb{C}^{d_1 \times d_2} and \mathbb{E}[M_i] = 0. Let us denote by \lVert M \rVert the operator norm of the matrix M. If \lVert M_i \rVert \leq \gamma holds almost surely for all i \in \{1, \dots, t\}, then for every \varepsilon > 0

:\Pr\left( \left\| \frac{1}{t} \sum_{i=1}^t M_i \right\| > \varepsilon \right) \leq (d_1+d_2) \exp \left( -\frac{3\varepsilon^2 t}{8\gamma^2} \right).

Notice that in order to conclude that the deviation from 0 is bounded by \varepsilon with high probability, we need to choose a number of samples t proportional to the logarithm of d_1+d_2. In general, unfortunately, a dependence on \log(\min(d_1,d_2)) is inevitable: take, for example, a diagonal random sign matrix of dimension d\times d. The operator norm of the sum of ''t'' independent samples is precisely the maximum deviation among ''d'' independent random walks of length ''t''. In order to achieve a fixed bound on the maximum deviation with constant probability, it is easy to see that ''t'' should grow logarithmically with ''d'' in this scenario. The following theorem can be obtained by assuming ''M'' has low rank, in order to avoid the dependency on the dimensions.
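The following Python sketch illustrates the diagonal random-sign example: the operator norm of the averaged matrix is the largest empirical mean among ''d'' independent ±1 sequences, and for fixed ''t'' the deviation probability visibly grows with ''d''. The parameter values are illustrative.

# Minimal sketch of the diagonal random-sign example: for a d x d diagonal matrix
# with i.i.d. +/-1 entries, the operator norm of the average of t samples equals
# the largest |empirical mean| over d independent +/-1 sequences, so t must grow
# like log d to keep a fixed deviation probability.
import numpy as np

rng = np.random.default_rng(4)
t, eps, trials = 200, 0.2, 2000

for d in (10, 100, 1000, 10000):
    plus_counts = rng.binomial(t, 0.5, size=(trials, d))   # number of +1s per diagonal entry
    means = (2 * plus_counts - t) / t                      # empirical mean of each +/-1 walk
    op_norms = np.abs(means).max(axis=1)                   # operator norm of the averaged matrix
    print(d, np.mean(op_norms > eps))                      # deviation probability grows with d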


Theorem without the dependency on the dimensions

Let 0 < \varepsilon < 1 and ''M'' be a random symmetric real matrix with \| \operatorname E[M] \| \leq 1 and \| M \| \leq \gamma almost surely. Assume that each element on the support of ''M'' has at most rank ''r''. Set

: t = \Omega \left( \frac{\gamma \log(\gamma/\varepsilon^2)}{\varepsilon^2} \right).

If r \leq t holds almost surely, then

:\Pr\left(\left\| \frac{1}{t} \sum_{i=1}^t M_i - \operatorname E[M] \right\| > \varepsilon \right) \leq \frac{1}{\mathbf{poly}(t)}

where M_1, \dots, M_t are i.i.d. copies of ''M''.


Sampling variant

The following variant of Chernoff's bound can be used to bound the probability that a majority in a population will become a minority in a sample, or vice versa.

Suppose there is a general population ''A'' and a sub-population ''B'' ⊆ ''A''. Mark the relative size of the sub-population (|''B''|/|''A''|) by ''r''. Suppose we pick an integer ''k'' and a random sample ''S'' ⊂ ''A'' of size ''k''. Mark the relative size of the sub-population in the sample (|''B''∩''S''|/|''S''|) by ''r''<sub>''S''</sub>. Then, for every fraction ''d'' ∈ [0,1]:

:\Pr\left(r_S < (1-d)\cdot r\right) < \exp\left(-r\cdot d^2 \cdot \frac{k}{2}\right)

In particular, if ''B'' is a majority in ''A'' (i.e. ''r'' > 0.5), we can bound the probability that ''B'' will remain a majority in ''S'' (''r''<sub>''S''</sub> > 0.5) by taking ''d'' = 1 − 1/(2''r''):

:\Pr\left(r_S > 0.5\right) > 1 - \exp\left(-r\cdot \left(1 - \frac{1}{2r}\right)^2 \cdot \frac{k}{2} \right)

This bound is of course not tight at all. For example, when ''r'' = 0.5 we get a trivial bound Prob > 0.
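A small Python sketch, which assumes sampling with replacement as an approximation, compares this bound with a simulated value of Pr(r_S > 0.5) and illustrates how loose it is; the values of ''r'' and ''k'' are arbitrary.

# Minimal sketch of the sampling variant: a sub-population of relative size r > 0.5,
# a random sample of size k, and the bound Pr(r_S > 0.5) > 1 - exp(-r (1 - 1/(2r))^2 k/2).
import numpy as np

rng = np.random.default_rng(5)
r, k = 0.6, 200

bound = 1 - np.exp(-r * (1 - 1 / (2 * r)) ** 2 * k / 2)

r_s = rng.binomial(k, r, size=100_000) / k     # sample fraction of B (with replacement)
empirical = np.mean(r_s > 0.5)

print(bound, empirical)                        # the bound is valid but far from tight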


Proofs


Multiplicative form

Following the conditions of the multiplicative Chernoff bound, let X_1, \dots, X_n be independent Bernoulli random variables, whose sum is X, each having probability p_i of being equal to 1. For a Bernoulli variable:

:\operatorname E \left[e^{t X_i}\right] = (1 - p_i) e^0 + p_i e^t = 1 + p_i (e^t - 1) \leq e^{p_i(e^t - 1)}

So, using (1) with a = (1+\delta)\mu for any \delta>0 and where \mu = \operatorname E[X] = \textstyle\sum_{i=1}^n p_i,

:\begin{align}
\Pr (X > (1 + \delta)\mu) &\le \inf_{t > 0} \exp(-t(1+\delta)\mu)\prod_{i=1}^n\operatorname E\left[\exp(t X_i)\right] \\
& \leq \inf_{t > 0} \exp\Big(-t(1+\delta)\mu + \sum_{i=1}^n p_i(e^t - 1)\Big) \\
& = \inf_{t > 0} \exp\Big(-t(1+\delta)\mu + (e^t - 1)\mu\Big).
\end{align}

If we simply set t = \log(1+\delta) so that t > 0 for \delta > 0, we can substitute and find

:\exp\Big(-t(1+\delta)\mu + (e^t - 1)\mu\Big) = \frac{e^{\delta\mu}}{(1+\delta)^{(1+\delta)\mu}} = \left[\frac{e^\delta}{(1+\delta)^{1+\delta}}\right]^\mu.

This proves the result desired.
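A quick numerical sanity check (a sketch, not part of the proof) that the substitution t = \log(1+\delta) indeed yields the claimed closed form:

# Quick numerical check that substituting t = log(1 + delta) into
# exp(-t(1+delta)*mu + (e^t - 1)*mu) gives (e^delta / (1+delta)^(1+delta))^mu.
import math

for delta in (0.1, 0.5, 2.0):
    for mu in (1.0, 7.5, 40.0):
        t = math.log(1 + delta)
        lhs = math.exp(-t * (1 + delta) * mu + (math.exp(t) - 1) * mu)
        rhs = (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu
        assert math.isclose(lhs, rhs), (delta, mu)
print("substitution checked numerically")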


Chernoff–Hoeffding theorem (additive form)

Let q = p + \varepsilon. Taking a = nq in (1), we obtain:

:\Pr\left ( \frac{1}{n} \sum X_i \ge q\right )\le \inf_{t>0} \frac{\operatorname E\left[\prod_i e^{t X_i}\right]}{e^{tnq}} = \inf_{t>0} \left ( \frac{\operatorname E\left[e^{t X_i}\right]}{e^{tq}}\right )^n.

Now, knowing that \Pr(X_i = 1) = p and \Pr(X_i = 0) = 1 - p, we have

:\left (\frac{\operatorname E\left[e^{t X_i}\right]}{e^{tq}}\right )^n = \left (\frac{p e^t + (1-p)}{e^{tq}}\right )^n = \left ( pe^{(1-q)t} + (1-p)e^{-qt} \right )^n.

Therefore, we can easily compute the infimum, using calculus:

:\frac{d}{dt} \left (pe^{(1-q)t} + (1-p)e^{-qt} \right) = (1-q)pe^{(1-q)t}-q(1-p)e^{-qt}

Setting the equation to zero and solving, we have

:\begin{align}
(1-q)pe^{(1-q)t} &= q(1-p)e^{-qt} \\
(1-q)pe^{t} &= q(1-p)
\end{align}

so that

:e^t = \frac{q(1-p)}{(1-q)p}.

Thus,

:t = \log\left(\frac{q(1-p)}{(1-q)p}\right).

As q = p + \varepsilon > p, we see that t > 0, so our bound is satisfied on t. Having solved for t, we can plug back into the equations above to find that

:\begin{align}
\log \left (pe^{(1-q)t} + (1-p)e^{-qt} \right ) &= \log \left ( e^{-qt}(1-p+pe^t) \right ) \\
&= \log\left (e^{-q\log\left(\frac{(1-p)q}{(1-q)p}\right)}\right) + \log\left(1-p+ pe^{\log\left(\frac{1-p}{1-q}\right)}e^{\log\left(\frac{q}{p}\right)}\right ) \\
&= -q\log\frac{1-p}{1-q} -q \log\frac{q}{p} + \log\left(1-p+ p\left(\frac{1-p}{1-q}\right)\frac{q}{p}\right) \\
&= -q\log\frac{1-p}{1-q} -q \log\frac{q}{p} + \log\left(\frac{(1-p)(1-q)}{1-q}+\frac{(1-p)q}{1-q}\right) \\
&= -q \log\frac{q}{p} + \left ( -q\log\frac{1-p}{1-q} + \log\frac{1-p}{1-q} \right ) \\
&= -q\log\frac{q}{p} + (1-q)\log\frac{1-p}{1-q} \\
&= -D(q \parallel p).
\end{align}

We now have our desired result, that

:\Pr \left (\tfrac{1}{n}\sum X_i \ge p + \varepsilon\right ) \le e^{-n D(p+\varepsilon\parallel p)}.

To complete the proof for the symmetric case, we simply define the random variable Y_i = 1 - X_i, apply the same proof, and plug it into our bound.
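A quick numerical sanity check (a sketch) of the final simplification: at the optimal t, the quantity pe^{(1-q)t} + (1-p)e^{-qt} equals e^{-D(q \parallel p)}.

# Quick numerical check of the final step of the proof: at the optimal
# t = log(q(1-p)/((1-q)p)), p*e^{(1-q)t} + (1-p)*e^{-qt} equals e^{-D(q||p)}.
import math

def kl(q, p):
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

for p, eps in ((0.2, 0.1), (0.5, 0.05), (0.7, 0.2)):
    q = p + eps
    t = math.log(q * (1 - p) / ((1 - q) * p))
    value = p * math.exp((1 - q) * t) + (1 - p) * math.exp(-q * t)
    assert math.isclose(value, math.exp(-kl(q, p))), (p, eps)
print("optimal-t identity checked numerically")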


See also

* Bernstein inequalities
* Concentration inequality – a summary of tail bounds on random variables
* Cramér's theorem
* Entropic value at risk
* Hoeffding's inequality
* Matrix Chernoff bound
* Moment generating function

